Introduction to Open Data Science - Course Project

About the project


# This is a so-called "R chunk" where you can write R code.
#version
date()
## [1] "Thu Nov 26 23:46:25 2020"

This course is my first time combining the use of R, R Markdown and GitHub, the core tools of this Introduction to Open Data Science (IODS) course.

Feeling very excited to embrace the future!

IODS project GitHub repository: https://github.com/mlammins/IODS-project

“By definition all scientists are data scientists. In my opinion, they are half hacker, half analyst, they use data to build products and find insights. It’s Columbus meets Columbo ― starry-eyed explorers and skeptical detectives.” ―Monica Rogati, Independent Data Science Advisor


2 Regression and model validation


2.1 Introducing the dataset

The background of the data as described by the author:

Kimmo Vehkalahti: ASSIST 2014 - Phase 3 (end of Part 2), N=183 Course: Johdatus yhteiskuntatilastotieteeseen, syksy 2014 (Introduction to Social Statistics, fall 2014 - in Finnish), international survey of Approaches to Learning, made possible by Teachers’ Academy funding for KV in 2013-2015.

Data collected: 3.12.2014 - 10.1.2015/KV. Data created: 14.1.2015/KV, in English 9.4.2015/KV, Florence, Italy. Imputation 4.4.2015: only missing information in certain backgrounds, minimal amount of missing values imputed using Phases 1 and 2.

For more information, see https://www.mv.helsinki.fi/home/kvehkala/JYTmooc/JYTOPKYS3-meta.txt

learning2014 <- read.csv("./data/learning2014.csv") # reading the analysis data
dim(learning2014) # number of rows and columns
## [1] 166   7
str(learning2014) # type of data
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...

The learning dataset used in this exercise consists of 166 observations of 7 variables. The variables deep, stra and surf were calculated from several Likert-scale questions (scored from 1 to 5). Variable names and short descriptions:

  1. gender - Gender: M (Male), F (Female)
  2. age - Age (in years) derived from the date of birth
  3. attitude - Global attitude toward statistics
  4. deep - Tendency to deep learning
  5. stra - Tendency to strategic learning
  6. surf - Tendency to surface learning
  7. points - Exam points

2.2 Graphical overview and summary of the data

To get an idea of the data, let’s make a graphical summary of the variables with females (red) and males (blue):

library(ggplot2)
library(GGally)
p <- ggpairs(learning2014, mapping = aes(col = gender, alpha = 0.3),
             lower = list(combo = wrap("facethist", bins = 20)),
             upper = list(continuous = wrap("cor", family = "sans")))
p # graphical summary

summary(learning2014) # numerical summary
##     gender               age           attitude          deep      
##  Length:166         Min.   :17.00   Min.   :1.400   Min.   :1.583  
##  Class :character   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:3.333  
##  Mode  :character   Median :22.00   Median :3.200   Median :3.667  
##                     Mean   :25.51   Mean   :3.143   Mean   :3.680  
##                     3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:4.083  
##                     Max.   :55.00   Max.   :5.000   Max.   :4.917  
##       stra            surf           points     
##  Min.   :1.250   Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.625   1st Qu.:2.417   1st Qu.:19.00  
##  Median :3.188   Median :2.833   Median :23.00  
##  Mean   :3.121   Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :5.000   Max.   :4.333   Max.   :33.00

Looking at the graphical overview, the distributions of female (red) and male (blue) values seem relatively similar in all categories. Females seem to have slightly higher values in surface learning (surf) and strategic learning (stra), and slightly lower values in attitude. The numerical summary of all data (not filtered by gender) shows that the majority of participants are young (under 30 years of age).

2.3 Fitting a regression model

Next, let’s test whether attitude, strategic learning (stra) and surface learning tendency (surf) correlate with the number of points obtained from the exam:

library(ggplot2)
# create a regression model with multiple explanatory variables
my_model <- lm(points ~ attitude + stra + surf, data = learning2014)
# print out a summary of the model
summary(my_model)
## 
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## attitude      3.3952     0.5741   5.913 1.93e-08 ***
## stra          0.8531     0.5416   1.575  0.11716    
## surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

The coefficients table gives an interpretation of the model. With the intercept giving the “baseline” for the exam points, attitude clearly has the strongest relationship with exam points: if attitude rises by one unit, exam points increase by 3.4 units, given that all other variables stay the same. The effects of stra and surf are below one unit. The importance of attitude can also be seen in the last column, which gives the statistical significance of each coefficient. The three stars *** indicate high statistical significance, i.e. the coefficient differs from zero and thus the variable has a relationship with the target variable. More precisely, the p-value shown here is the probability of obtaining an estimate at least this far from zero if the true coefficient were zero.

Let’s drop surf (the highest p-value) to see if it improves the fit:

# create a regression model with multiple explanatory variables
my_model2 <- lm(points ~ attitude + stra, data = learning2014)
# print out a summary of the model
summary(my_model2)
## 
## Call:
## lm(formula = points ~ attitude + stra, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.6436  -3.3113   0.5575   3.7928  10.9295 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   8.9729     2.3959   3.745  0.00025 ***
## attitude      3.4658     0.5652   6.132 6.31e-09 ***
## stra          0.9137     0.5345   1.709  0.08927 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared:  0.2048, Adjusted R-squared:  0.1951 
## F-statistic: 20.99 on 2 and 163 DF,  p-value: 7.734e-09

Leaving out surface learning increases the significance of the remaining variables (smaller values in the Pr(>|t|) column). However, the p-value of strategic learning (stra) is still relatively high (debatable); as it is below 0.10, let’s keep it in the model. So we have found our final model.

2.4 Analyzing the fitted model

Our final model is thus

\[\text{exam points} = 8.97 + 3.47 \times \text{attitude} + 0.91 \times \text{stra}\]

This means that a one-unit increase in attitude increases exam points by 3.47 (stra being unchanged), and a one-unit increase in strategic learning (stra) increases exam points by 0.91. The baseline for exam points is 8.97 (the y-intercept). Note that this is only the systematic part of the model (excluding the error term).

The numbers at the end of the multiple regression summary can be explained succinctly as:

  • Residual standard error: the standard deviation of the residuals (errors) of the regression model.
  • Multiple R-squared: the share of the variance in exam points explained by the model.
  • Adjusted R-squared: the same as multiple R-squared, but penalized for the number of explanatory variables relative to the number of observations.

So adjusted R-squared tells how well the model fits the data, i.e. the share of the variation in the dependent variable that the linear model explains (ranging between 0 and 1). The R-squared seen here (roughly 0.20) is quite low, meaning the residuals are large relative to the variation in exam points. That is why you cannot rely on the R-squared number alone; a visual inspection of the residuals is a must!
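
To make these definitions concrete, the R-squared can be reproduced by hand from the fitted model (a small sketch using my_model2 and learning2014 from above):

# R-squared = 1 - RSS/TSS
rss <- sum(residuals(my_model2)^2)                               # residual sum of squares
tss <- sum((learning2014$points - mean(learning2014$points))^2)  # total sum of squares
1 - rss / tss  # matches the Multiple R-squared (~0.205) reported above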

2.5 Model validity and diagnostic plots

# drawing diagnostic plots using the plot() function. Choose the plots 1, 2 and 5:
par(mfrow = c(2,2))
plot(my_model2, which=c(1,2,5))

A statistical model always includes assumptions that describe the data generating process. In this linear regression case we assume

  • linearity, i.e. the target variable can be modelled as a linear combination of the model parameters
  • the errors are normally distributed, are not correlated and have constant variance (the size of the given error does not depend on the explanatory variables)

These assumptions can be checked through analyzing residuals.

  • Residuals vs. model predictions i.e. fitted values: for analyzing constant variance of errors. “Any pattern in the plot implies a problem with the assumptions.”
  • QQ-plot: for analyzing the normality of errors. “The better residuals are located on the identity line, the better the normality assumption holds.”
  • Leverage: for analyzing the impact of a single observation on the model. “The outliers have high impact.”

In residuals vs. fitted we can see the source of the low R-squared value: the residuals are large (note the y-axis scale). However, they show no noticeable pattern, so the constant variance assumption holds. The QQ-plot shows that the residuals lie nicely on the line, i.e. the normality assumption holds. Residuals vs. leverage shows some points at the right-hand side of the plot, but the x-axis scale (maximum leverage of about 0.05) is still quite small, so no single observation has an outsized impact on the model. All in all, our multiple regression model seems to describe the data well and all the assumptions hold.

With the model validated, now we could use our regression model to predict the target variable behavior!
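
For instance, a minimal sketch of such a prediction (the new student and the chosen values are hypothetical):

# predict exam points for a hypothetical student with attitude = 4 and stra = 3
new_student <- data.frame(attitude = 4, stra = 3)
predict(my_model2, newdata = new_student)  # about 8.97 + 3.47*4 + 0.91*3 ≈ 25.6 points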


3 Logistic regression


3.1 Introducing the joined dataset

The dataset describes student achievement in secondary education at two Portuguese schools. It joins two source datasets covering performance in two distinct subjects: Mathematics (mat) and Portuguese language (por). The data attributes include student grades and demographic, social and school-related features (notably alcohol consumption), and the data were collected using school reports and questionnaires.

More information: https://archive.ics.uci.edu/ml/datasets/Student+Performance

alc <- read.csv("./data/alc.csv") # reading the analysis data
dim(alc) # number of rows and columns
## [1] 382  35
str(alc) # type of data
## 'data.frame':    382 obs. of  35 variables:
##  $ school    : chr  "GP" "GP" "GP" "GP" ...
##  $ sex       : chr  "F" "F" "F" "F" ...
##  $ age       : int  18 17 15 15 16 16 16 17 15 15 ...
##  $ address   : chr  "U" "U" "U" "U" ...
##  $ famsize   : chr  "GT3" "GT3" "LE3" "GT3" ...
##  $ Pstatus   : chr  "A" "T" "T" "T" ...
##  $ Medu      : int  4 1 1 4 3 4 2 4 3 3 ...
##  $ Fedu      : int  4 1 1 2 3 3 2 4 2 4 ...
##  $ Mjob      : chr  "at_home" "at_home" "at_home" "health" ...
##  $ Fjob      : chr  "teacher" "other" "other" "services" ...
##  $ reason    : chr  "course" "course" "other" "home" ...
##  $ nursery   : chr  "yes" "no" "yes" "yes" ...
##  $ internet  : chr  "no" "yes" "yes" "yes" ...
##  $ guardian  : chr  "mother" "father" "mother" "mother" ...
##  $ traveltime: int  2 1 1 1 1 1 1 2 1 1 ...
##  $ studytime : int  2 2 2 3 2 2 2 2 2 2 ...
##  $ failures  : int  0 0 2 0 0 0 0 0 0 0 ...
##  $ schoolsup : chr  "yes" "no" "yes" "no" ...
##  $ famsup    : chr  "no" "yes" "no" "yes" ...
##  $ paid      : chr  "no" "no" "yes" "yes" ...
##  $ activities: chr  "no" "no" "no" "yes" ...
##  $ higher    : chr  "yes" "yes" "yes" "yes" ...
##  $ romantic  : chr  "no" "no" "no" "yes" ...
##  $ famrel    : int  4 5 4 3 4 5 4 4 4 5 ...
##  $ freetime  : int  3 3 3 2 3 4 4 1 2 5 ...
##  $ goout     : int  4 3 2 2 2 2 4 4 2 1 ...
##  $ Dalc      : int  1 1 2 1 1 1 1 1 1 1 ...
##  $ Walc      : int  1 1 3 1 2 2 1 1 1 1 ...
##  $ health    : int  3 3 3 5 5 5 3 1 1 5 ...
##  $ absences  : int  5 3 8 1 2 8 0 4 0 0 ...
##  $ G1        : int  2 7 10 14 8 14 12 8 16 13 ...
##  $ G2        : int  8 8 10 14 12 14 12 9 17 14 ...
##  $ G3        : int  8 8 11 14 12 14 12 10 18 14 ...
##  $ alc_use   : num  1 1 2.5 1 1.5 1.5 1 1 1 1 ...
##  $ high_use  : logi  FALSE FALSE TRUE FALSE FALSE FALSE ...

The final joined dataset has 382 observations and 35 variables, consisting of only unique individuals. The datasets were joined using the 13 student identifier variables: “school”, “sex”, “age”, “address”, “famsize”, “Pstatus”, “Medu”, “Fedu”, “Mjob”, “Fjob”, “reason”, “nursery” and “internet”. Only students present in both datasets were kept. The variables not used for joining have been combined by averaging (including the grade variables). More detailed information about the variables is presented below (possible values in parentheses):

  • 1 school - student’s school (binary: ‘GP’ - Gabriel Pereira or ‘MS’ - Mousinho da Silveira)
  • 2 sex - student’s sex (binary: ‘F’ - female or ‘M’ - male)
  • 3 age - student’s age (numeric: from 15 to 22)
  • 4 address - student’s home address type (binary: ‘U’ - urban or ‘R’ - rural)
  • 5 famsize - family size (binary: ‘LE3’ - less or equal to 3 or ‘GT3’ - greater than 3)
  • 6 Pstatus - parent’s cohabitation status (binary: ‘T’ - living together or ‘A’ - apart)
  • 7 Medu - mother’s education (numeric: 0 - none, 1 - primary education (4th grade), 2 5th to 9th grade, 3 secondary education or 4 higher education)
  • 8 Fedu - father’s education (numeric: 0 - none, 1 - primary education (4th grade), 2 5th to 9th grade, 3 secondary education or 4 higher education)
  • 9 Mjob - mother’s job (nominal: ‘teacher’, ‘health’ care related, civil ‘services’ (e.g. administrative or police), ‘at_home’ or ‘other’)
  • 10 Fjob - father’s job (nominal: ‘teacher’, ‘health’ care related, civil ‘services’ (e.g. administrative or police), ‘at_home’ or ‘other’)
  • 11 reason - reason to choose this school (nominal: close to ‘home’, school ‘reputation’, ‘course’ preference or ‘other’)
  • 12 nursery - attended nursery school (binary: yes or no)
  • 13 internet - Internet access at home (binary: yes or no)
  • 14 guardian - student’s guardian (nominal: ‘mother’, ‘father’ or ‘other’)
  • 15 traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. to 1 hour, or 4 - >1 hour)
  • 16 studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)
  • 17 failures - number of past class failures (numeric: n if 1<=n<3, else 4)
  • 18 schoolsup - extra educational support (binary: yes or no)
  • 19 famsup - family educational support (binary: yes or no)
  • 20 paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no)
  • 21 activities - extra-curricular activities (binary: yes or no)
  • 22 higher - wants to take higher education (binary: yes or no)
  • 23 romantic - with a romantic relationship (binary: yes or no)
  • 24 famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent)
  • 25 freetime - free time after school (numeric: from 1 - very low to 5 - very high)
  • 26 goout - going out with friends (numeric: from 1 - very low to 5 - very high)
  • 27 Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high)
  • 28 Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high)
  • 29 health - current health status (numeric: from 1 - very bad to 5 - very good)
  • 30 absences - number of school absences (numeric: from 0 to 93)

The grades G1, G2 and G3 are related to the course subject, Math or Portuguese. Variables alc_use and high_use were added to the original datasets:

  • 31 G1 - first period grade (numeric: from 0 to 20)
  • 32 G2 - second period grade (numeric: from 0 to 20)
  • 33 G3 - final grade (numeric: from 0 to 20, output target)
  • 34 alc_use - average of Dalc and Walc (numeric: from 1 - very low to 5 - very high)
  • 35 high_use - high alcohol use status (binary: TRUE (if alc_use is higher than 2) or FALSE)
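
The wrangling itself is not repeated here, but a minimal sketch of how these two derived columns can be computed from Dalc and Walc (matching the descriptions above):

library(dplyr)
# sketch: average the workday and weekend consumption, then flag averages above 2
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2)
alc <- mutate(alc, high_use = alc_use > 2)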

3.2 Purpose of the analysis and study hypothesis

The purpose of the analysis is to study the relationships between high/low alcohol consumption and some of the other variables in the data. Out of the many variables present, parents’ cohabitation status, mother’s education, quality of family relationships and the number of school absences were chosen. The study hypotheses are thus as follows:

  • Hypothesis 1: If the parents are living apart (Pstatus=‘A’), the alcohol consumption (high_use) is higher
  • Hypothesis 2: The higher the mother’s education (Medu), the lower the alcohol consumption (high_use)
  • Hypothesis 3: The higher the quality of family relationships (famrel), the lower the alcohol consumption (high_use)
  • Hypothesis 4: The higher the number of school absences (absences), the higher the alcohol consumption (high_use)

Note! Now the target variable high_use is a binary variable (TRUE=1, FALSE=0) so we must use logistic regression.
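
Logistic regression models the log-odds of the outcome as a linear function of the predictors: \[\log\frac{P(\text{high\_use}=1)}{1-P(\text{high\_use}=1)} = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k.\]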

3.3 Numerical and graphical exploration

Before fitting the logistic regression, let’s explore the distributions of the chosen variables and their connection to the target variable, both numerically and graphically. First, let’s take a look at the variable distributions:

library(dplyr)
# use dplyr to make a smaller dataset and include all chosen variables
alc_test <- select(alc, Pstatus, Medu, famrel, absences, alc_use, high_use)
# change Pstatus from char to factor
alc_test$Pstatus <- as.factor(alc_test$Pstatus)
# numerical summary of chosen variables
summary(alc_test[-c(5,6)])
##  Pstatus      Medu           famrel         absences   
##  A: 38   Min.   :0.000   Min.   :1.000   Min.   : 0.0  
##  T:344   1st Qu.:2.000   1st Qu.:4.000   1st Qu.: 1.0  
##          Median :3.000   Median :4.000   Median : 3.0  
##          Mean   :2.806   Mean   :3.937   Mean   : 4.5  
##          3rd Qu.:4.000   3rd Qu.:5.000   3rd Qu.: 6.0  
##          Max.   :4.000   Max.   :5.000   Max.   :45.0
# graphical exploration of variable distribution
par(mfrow = c(2,2))
#
barplot(table(alc_test$Pstatus), main="Distribution of Pstatus")
barplot(table(alc_test$Medu), main="Distribution of Medu")
barplot(table(alc_test$famrel), main="Distribution of famrel")
barplot(table(alc_test$absences), main="Distribution of absences")

From the data we can see that the vast majority of the participants’ parents live together (T). Also, most of the participants have good family relations (famrel = 4) and the number of absences is relatively small (75 % of participants have 6 or fewer). Mother’s education is almost evenly distributed across the non-zero values.

Out of curiosity, let’s see how the alcohol consumption is distributed.

summary(alc_test[c(5,6)])
##     alc_use       high_use      
##  Min.   :1.000   Mode :logical  
##  1st Qu.:1.000   FALSE:268      
##  Median :1.500   TRUE :114      
##  Mean   :1.889                  
##  3rd Qu.:2.500                  
##  Max.   :5.000
par(mfrow = c(1,2))
barplot(table(alc_test$alc_use), main="Distribution of alc_use")
barplot(table(alc_test$high_use), main="Distribution of high_use")

It seems that only about one third of the participants are high alcohol users. To give a fuller picture, the numeric alcohol use (alc_use, from 1 to 5) is shown here alongside the binary high use variable (high_use, TRUE if alc_use > 2).

Now let’s see how our chosen variables relate to alcohol consumption. Here, too, the numeric alc_use is used.

par(mfrow = c(2,2))
boxplot(alc_use ~ Pstatus, data = alc)
boxplot(alc_use ~ Medu, data=alc)
boxplot(alc_use ~ famrel, data=alc)
boxplot(alc_use ~ absences, data=alc)

These boxplots give a rough idea of the relationships between variables and the target variable:

  • Pstatus vs. alc_use: roughly the same Q1-Q3 range, but the median is smaller in the A group. This is contrary to what was hypothesized.
  • Medu vs. alc_use: group 0 clearly has the highest median alc_use and the trend seems to go down as Medu increases, but groups 3 and 4 show large variability (their Q1-Q3 ranges are wide).
  • famrel vs. alc_use: nothing conclusive on the hypothesis, although excellent family relations (famrel = 5) seem to protect from high alcohol consumption (smaller median, narrower Q1-Q3 range).
  • absences vs. alc_use: alcohol use seems to increase with absences, which supports the hypothesis.

3.4 Logistic regression - the model

Time to form a statistical model using logistic regression! Note that Pstatus is treated here as a factor. To summarize:

Target variable = high_use

Chosen variables: Pstatus, Medu, famrel, absences

m <- glm(high_use ~ Pstatus+Medu+famrel+absences, data = alc_test, family ="binomial")
# print out a summary of the model
summary(m)
## 
## Call:
## glm(formula = high_use ~ Pstatus + Medu + famrel + absences, 
##     family = "binomial", data = alc_test)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.2741  -0.8107  -0.7076   1.1985   1.8620  
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -0.453190   0.697544  -0.650 0.515890    
## PstatusT     0.168029   0.397078   0.423 0.672176    
## Medu        -0.008959   0.107430  -0.083 0.933542    
## famrel      -0.243649   0.124088  -1.964 0.049585 *  
## absences     0.088049   0.022951   3.836 0.000125 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 443.46  on 377  degrees of freedom
## AIC: 453.46
## 
## Number of Fisher Scoring iterations: 4

The most important insight from the model summary is the coefficients section. Using the estimated coefficients, the fitted model on the log-odds scale is

\(\log\frac{P(\text{high\_use}=1)}{1-P(\text{high\_use}=1)} = -0.45 + 0.17 \times PstatusT - 0.01 \times Medu - 0.24 \times famrel + 0.09 \times absences\)

Since the target variable high_use is binary, it only takes the values 0 (FALSE, “failure”) and 1 (TRUE, “success”) in the modelling sense, and the coefficients act on the log-odds scale. They can thus be interpreted as follows:

  • Pstatus: the baseline level is Pstatus=A, so the intercept (-0.45) is the starting point for parents living apart, while PstatusT adds 0.17 for parents living together (-0.45 + 0.17 = -0.28 before the other terms). Parents living apart is thus, contrary to intuition, associated with slightly lower log-odds of high alcohol use than parents living together.
  • Medu: a higher level of mother’s education lessens the probability of high alcohol use, but the effect is negligible.
  • famrel: the negative coefficient means a high value of famrel lessens the probability of high_use.
  • absences: the positive coefficient means an increased number of absences increases the probability of high_use.

Of the variables, only famrel and absences were found to be statistically significant, at the 0.05 and 0.001 levels respectively.


The model coefficients can also be interpreted as odds ratios.

Odds: the ratio of expected “successes” to “failures”, i.e. \(\frac{p}{1-p}\), with values ranging from 0 to infinity.

So higher odds correspond to a higher probability of success. They are an alternative way of expressing probabilities. Let’s calculate the odds ratios and confidence intervals for the coefficients.
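
Note that exponentiating a coefficient of the logistic model turns its log-odds effect into an odds ratio, \(\mathrm{OR}_i = e^{\beta_i}\); this is exactly what the exp calls in the chunk below compute.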

# print out the coefficients of the model
coef(m)
##  (Intercept)     PstatusT         Medu       famrel     absences 
## -0.453190280  0.168028965 -0.008958589 -0.243649169  0.088049068
# compute odds ratios (OR)
OR <- coef(m) %>% exp
# compute confidence intervals (CI)
CI <- confint(m) %>% exp
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                    OR     2.5 %   97.5 %
## (Intercept) 0.6355972 0.1580208 2.462028
## PstatusT    1.1829709 0.5566928 2.673024
## Medu        0.9910814 0.8032307 1.224977
## famrel      0.7837626 0.6137127 1.000047
## absences    1.0920417 1.0461954 1.144793

Odds ratios quantify the relationship between each explanatory variable and the target variable: an odds ratio greater than 1 means the variable is positively associated with “success”. The odds ratios of our variables can be interpreted as follows:

  • Pstatus: connected to the intercept and PstatusT as described above. The odds ratio of the intercept is less than 1, implying a negative association with high alcohol use, while PstatusT has a positive association (greater than 1). But both have wide 95 % CIs that include 1, so neither association is significant.
  • Medu: the odds ratio is very close to 1 (neither a positive nor a negative association) and the 95 % CI includes 1, so no significance.
  • famrel: less than 1, with the upper end of the 95 % CI just barely above 1. Thus famrel has a borderline-significant negative association with the target variable.
  • absences: greater than 1, with the 95 % CI entirely above 1. A positive association with strong significance.

Comparison of the results with our hypotheses:

  • Hypothesis 1: Pstatus=A implies higher high_use. The results are contrary: students whose parents live together show slightly higher odds of high alcohol use than those whose parents live apart.
  • Hypothesis 2: higher Medu implies lower high_use. The results show that while higher mother’s education is associated with lower alcohol use, the effect is negligible.
  • Hypothesis 3: higher famrel implies lower high_use. This hypothesis holds: higher quality of family relations is associated with lower alcohol consumption.
  • Hypothesis 4: higher absences implies higher high_use. The hypothesis holds; this is, in fact, the most significant explanatory variable in the model.

3.5 Predictive power of the model

Let’s simplify our model by discarding the least significant variables, Pstatus and Medu. The final model then contains only the variables famrel and absences.

# fix the model
m2 <- glm(high_use ~ famrel+absences, data = alc_test, family ="binomial")
# print out a summary of the model
summary(m2) 
## 
## Call:
## glm(formula = high_use ~ famrel + absences, family = "binomial", 
##     data = alc_test)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.3139  -0.8028  -0.7125   1.2100   1.8605  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -0.33027    0.50783  -0.650 0.515461    
## famrel      -0.24109    0.12365  -1.950 0.051211 .  
## absences     0.08668    0.02270   3.819 0.000134 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 443.66  on 379  degrees of freedom
## AIC: 449.66
## 
## Number of Fisher Scoring iterations: 4

The Pr(>|z|) values of famrel and absences are almost the same as in the original model, but the AIC decreased (from 453.46 to 449.66) and the model is simpler than before, so this is an improvement.
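
One way to check this is to compare the two fits directly by AIC (the same values appear at the end of each summary above):

# lower AIC indicates a better trade-off between fit and complexity
AIC(m, m2)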

Let’s use the improved model to predict the high_use value of each individual.

# predict() the probability of high_use
probabilities <- predict(m2, type = "response")
# add the predicted probabilities to 'alc_test'
alc_test <- mutate(alc_test, probability = probabilities)
# use the probabilities to make a prediction of high_use
alc_test <- mutate(alc_test, prediction = probabilities>0.5)
# see the last ten original classes, predicted probabilities, and class predictions
select(alc_test, famrel,absences, high_use, probability, prediction) %>% tail(10)
##     famrel absences high_use probability prediction
## 373      4        0    FALSE   0.2150733      FALSE
## 374      5        7     TRUE   0.2831342      FALSE
## 375      5        1    FALSE   0.1901522      FALSE
## 376      4        6    FALSE   0.3154941      FALSE
## 377      5        2    FALSE   0.2038593      FALSE
## 378      4        2    FALSE   0.2457776      FALSE
## 379      2        2    FALSE   0.3454525      FALSE
## 380      1        3    FALSE   0.4227907      FALSE
## 381      2        4     TRUE   0.3856256      FALSE
## 382      4        2     TRUE   0.2457776      FALSE
# tabulate the target variable versus the predictions
select(alc_test, high_use, prediction) %>% table()
##         prediction
## high_use FALSE TRUE
##    FALSE   259    9
##    TRUE    101   13
library(ggplot2)
# initialize a plot of 'high_use' versus 'probability' in 'alc_test'
g <- ggplot(alc_test, aes(x =probability, y = high_use, col=prediction))

# define the geom as points and draw the plot
g+geom_point()

# tabulate the target variable versus the predictions
table(high_use = alc_test$high_use, prediction = alc_test$prediction) %>% prop.table() %>% addmargins()
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.67801047 0.02356021 0.70157068
##    TRUE  0.26439791 0.03403141 0.29842932
##    Sum   0.94240838 0.05759162 1.00000000
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc_test$high_use, prob = alc_test$probability)
## [1] 0.2879581

To interpret the result: the model predicts wrongly about 29 % of the time. There is much room for improvement, although it is better than random guessing (a 50-50 chance).
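
As a benchmark, the same loss function can score the trivial strategy of always predicting FALSE, which our model only narrowly beats:

# error of always predicting FALSE = share of TRUE cases (114/382, about 0.30)
loss_func(class = alc_test$high_use, prob = 0)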

3.6 10-fold cross-validation of the model

# K-fold cross-validation
library(boot)
cv <- cv.glm(data = alc_test, cost = loss_func, glmfit = m2, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2905759

The prediction error obtained here (about 0.29) is larger than that of the model introduced in DataCamp (prediction error of 0.26). Choosing more significant explanatory variables would improve the predictions.
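
For example, one could refit with other candidate predictors and compare the cross-validation errors. A quick sketch (sex and goout are columns of alc; the variables chosen here are illustrative, not tuned):

# hypothetical alternative model and its 10-fold CV error
m3 <- glm(high_use ~ sex + goout + absences, data = alc, family = "binomial")
cv3 <- cv.glm(data = alc, cost = loss_func, glmfit = m3, K = 10)
cv3$delta[1]  # compare with the 0.29 obtained above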


4 Clustering and classification


4.1 Introducing the dataset

This chapter’s dataset consists of housing values in suburbs of Boston (the Boston data from the MASS package).

# access the MASS package
library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

The dataset contains 506 observations in 14 variables:

  1. crim – per capita crime rate by town (numeric: from 0.00632 to 88.9762)
  2. zn – proportion of residential land zoned for lots over 25,000 sq.ft. (numeric: from 0 to 100)
  3. indus – proportion of non-retail business acres per town (numeric: from 0.46 to 27.74)
  4. chas – Charles River dummy variable (binary: 1 if tract bounds river, 0 otherwise)
  5. nox – nitrogen oxides concentration (parts per 10 million) (numeric: from 0.385 to 0.871)
  6. rm – average number of rooms per dwelling (numeric: from 3.561 to 8.78)
  7. age – proportion of owner-occupied units built prior to 1940 (numeric: 2.9 to 100)
  8. dis – weighted mean of distances to five Boston employment centres (numeric: from 1.13 to 12.127)
  9. rad – index of accessibility to radial highways (numeric: from 1 to 24)
  10. tax – full-value property-tax rate per $10,000 (numeric: from 187 to 711)
  11. ptratio – pupil-teacher ratio by town (numeric: from 12.6 to 22)
  12. black – 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town (numeric: from 0.32 to 396.9)
  13. lstat – lower status of the population (percent) (numeric: from 1.73 to 37.97)
  14. medv – median value of owner-occupied homes in $1000s (numeric: from 5 to 50)

4.2 Graphical overview and summary of the data

The function pairs() gives a rough visual idea of the data, while summary() describes the variables numerically.

library(dplyr)
library(corrplot)
pairs(Boston) # done as in Data Camp, but it is so small, almost impossible to see anything

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

Based on the numerical summary, the variables crim, zn, indus, dis and rad have quite low values, while rm, age and black have higher values. The output of pairs() is very difficult to read, so let’s calculate the correlation matrix to see the relationships between the variables. The (rather large) correlation matrix is easier to interpret when visualized with the corrplot() function.

# calculate the correlation matrix and round it
cor_matrix<-cor(Boston) 

# print the correlation matrix, kable for nicer looking table
knitr::kable(
cor_matrix %>% round(digits=2)
)
           crim     zn  indus   chas    nox     rm    age    dis    rad    tax ptratio  black  lstat   medv
crim       1.00  -0.20   0.41  -0.06   0.42  -0.22   0.35  -0.38   0.63   0.58   0.29  -0.39   0.46  -0.39
zn        -0.20   1.00  -0.53  -0.04  -0.52   0.31  -0.57   0.66  -0.31  -0.31  -0.39   0.18  -0.41   0.36
indus      0.41  -0.53   1.00   0.06   0.76  -0.39   0.64  -0.71   0.60   0.72   0.38  -0.36   0.60  -0.48
chas      -0.06  -0.04   0.06   1.00   0.09   0.09   0.09  -0.10  -0.01  -0.04  -0.12   0.05  -0.05   0.18
nox        0.42  -0.52   0.76   0.09   1.00  -0.30   0.73  -0.77   0.61   0.67   0.19  -0.38   0.59  -0.43
rm        -0.22   0.31  -0.39   0.09  -0.30   1.00  -0.24   0.21  -0.21  -0.29  -0.36   0.13  -0.61   0.70
age        0.35  -0.57   0.64   0.09   0.73  -0.24   1.00  -0.75   0.46   0.51   0.26  -0.27   0.60  -0.38
dis       -0.38   0.66  -0.71  -0.10  -0.77   0.21  -0.75   1.00  -0.49  -0.53  -0.23   0.29  -0.50   0.25
rad        0.63  -0.31   0.60  -0.01   0.61  -0.21   0.46  -0.49   1.00   0.91   0.46  -0.44   0.49  -0.38
tax        0.58  -0.31   0.72  -0.04   0.67  -0.29   0.51  -0.53   0.91   1.00   0.46  -0.44   0.54  -0.47
ptratio    0.29  -0.39   0.38  -0.12   0.19  -0.36   0.26  -0.23   0.46   0.46   1.00  -0.18   0.37  -0.51
black     -0.39   0.18  -0.36   0.05  -0.38   0.13  -0.27   0.29  -0.44  -0.44  -0.18   1.00  -0.37   0.33
lstat      0.46  -0.41   0.60  -0.05   0.59  -0.61   0.60  -0.50   0.49   0.54   0.37  -0.37   1.00  -0.74
medv      -0.39   0.36  -0.48   0.18  -0.43   0.70  -0.38   0.25  -0.38  -0.47  -0.51   0.33  -0.74   1.00
# visualize the correlation matrix
corrplot(cor_matrix, method="circle", type="upper", cl.pos="b", tl.pos="d", tl.cex=0.6)

From the correlation matrix we can see strong

  • negative correlations between distance to the employment centres and the proportion of units built before 1940 (dis & age), the nitrogen oxides concentration (dis & nox) and the proportion of non-retail business acres (dis & indus), as well as between the median value of homes and the lower status of the population (medv & lstat), and

  • positive correlations between the property tax rate and accessibility to radial highways (tax & rad), among others.

4.3 Standardizing and splitting data into train and test sets

Standardization of the data is useful when the variables have large differences between their ranges or are measured in different units. Let’s scale the Boston data by subtracting the column means from the corresponding columns and dividing the difference by the standard deviation: \[scaled(x) = \frac{x-mean(x)}{sd(x)}.\] This is one of the most popular ways of standardizing data, the z-score. Afterwards all variables have a mean of zero and a standard deviation of one, and are thus on the same scale.

# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865

Next we shall create a categorical variable crime (per capita crime rate by town) from the standardized data set. Let’s cut the variable by quantiles to get the high, low and middle rates of crime into their own categories. Finally, let’s drop the old crime rate variable from the data set.

# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, label=c("low","med_low","med_high","high"))
# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

In order to test the predictive power of a statistical method, let’s divide the scaled Boston data set randomly into a training set (80 %) and a test set (20 %).

# number of rows in the Boston dataset 
n <- nrow(Boston)
# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]
# create test set 
test <- boston_scaled[-ind,]

4.4 Fitting the LDA

The standardization was done to satisfy the assumptions of linear discriminant analysis (LDA):

  • the variables are normally distributed (conditional on the classes)

  • each variable has the same variance in each class.

The general idea of LDA is to find the linear combinations of the features that best separate the classes, thereby transforming the data from a higher-dimensional space into a lower-dimensional one.

Now, let’s fit LDA on the train set with the newly-created crime as the target variable and all other variables as predictor variables. The result can be visualised by the LDA (bi)plot.

# linear discriminant analysis
lda.fit <- lda(crime ~., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2574257 0.2475248 0.2574257 0.2376238 
## 
## Group means:
##                   zn      indus        chas        nox          rm        age
## low       0.99531926 -0.9445087 -0.08304540 -0.8918876  0.42890499 -0.9019321
## med_low  -0.03874534 -0.3064388  0.04263895 -0.5778167 -0.07079871 -0.3662280
## med_high -0.37839823  0.1613017  0.21980846  0.3748742  0.12772335  0.3808770
## high     -0.48724019  1.0172418 -0.10828322  1.0507016 -0.40218532  0.8210847
##                 dis        rad        tax     ptratio      black        lstat
## low       0.9495201 -0.6914416 -0.6973278 -0.47956214  0.3735772 -0.773557933
## med_low   0.3979790 -0.5374145 -0.4929902 -0.08662278  0.3222752 -0.182754796
## med_high -0.3667214 -0.4131592 -0.3165625 -0.31123316  0.1069665  0.006586263
## high     -0.8667899  1.6368728  1.5131579  0.77931510 -0.7624228  0.930586335
##                medv
## low       0.5027587
## med_low   0.0522114
## med_high  0.1861875
## high     -0.6508496
## 
## Coefficients of linear discriminants:
##                 LD1          LD2         LD3
## zn       0.10215810  0.680904262 -0.90716500
## indus    0.05510046 -0.274399994  0.56835020
## chas    -0.10211335 -0.032715738  0.10213969
## nox      0.38245444 -0.763020384 -1.49067121
## rm      -0.08966776 -0.109702920 -0.10531986
## age      0.24208339 -0.269944265 -0.08998948
## dis     -0.05825621 -0.245682770  0.22390670
## rad      3.11149751  0.878096719  0.18595577
## tax      0.01931005  0.121978430  0.17562088
## ptratio  0.11399809  0.002575832 -0.26454735
## black   -0.13115549  0.006079406  0.10871435
## lstat    0.25338996 -0.258510488  0.43171171
## medv     0.21326171 -0.384588859 -0.13661835
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9479 0.0403 0.0118
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col=classes, pch=classes)
lda.arrows(lda.fit, myscale = 1)

Here we can see the results of the LDA. Each color represents a class of the target variable. The predictor variables are drawn as arrows in the middle of the picture, with the length and direction of each arrow depicting the effect of that predictor. It seems that the variables rad, zn and nox discriminate/separate the classes best.

4.5 Predicting with the LDA

Now we use the fitted LDA model to predict the classes of the test data. LDA calculates the probability of a new observation belonging to each of the classes, and the observation is then assigned to the class with the highest probability.

First, let’s save the correct classes and then remove the crime variable.

# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       12       9        2    0
##   med_low    1      20        5    0
##   med_high   0       6       15    1
##   high       0       0        0   31

The prediction would have been perfect if all the values were on the diagonal. That is certainly not the case here, but the largest values are on the diagonal. There is some mixing among the first three classes, while the last class (high) is predicted most accurately. This was to be expected based on the training set figure.
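
From the cross-tabulation one can also compute the overall accuracy on the test set (for the table above, (12 + 20 + 15 + 31)/102 ≈ 0.76):

# overall share of correct predictions: the diagonal divided by the total
tab <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(tab)) / sum(tab)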

4.6 Distances and clustering

Different distance measures (e.g. Euclidean or Manhattan) are used to quantify how similar or dissimilar observations are. Similar observations form clusters, which can be found with different methods (e.g. k-means).

Let’s find clusters in the Boston dataset using k-means. First, let’s reload the dataset and standardize it so that the distances between observations are comparable. Then let’s run the k-means algorithm on the dataset.

# reload Boston from MASS
library(MASS)
library(ggplot2)
data("Boston")
# center and standardize variables
boston_scaled <- scale(Boston)

# euclidean distance matrix
dist_eu <- dist(boston_scaled)
# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
# manhattan distance matrix
dist_man <- dist(boston_scaled,method="manhattan")
# look at the summary of the distances
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618

One can see that the Manhattan distance gives much larger values than the Euclidean distance. For now, however, let’s use the Euclidean distance.
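
As a sanity check on what dist() computes, both metrics can be evaluated by hand for the first pair of scaled observations (a minimal sketch):

# distances between the first two scaled observations, computed by hand
x <- boston_scaled[1, ]
y <- boston_scaled[2, ]
sqrt(sum((x - y)^2))  # Euclidean distance, the first entry of dist_eu
sum(abs(x - y))       # Manhattan distance, the first entry of dist_man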

# k-means clustering
km1 <-kmeans(boston_scaled, centers = 1)
km2 <-kmeans(boston_scaled, centers = 2)
km3 <-kmeans(boston_scaled, centers = 3)
km4 <-kmeans(boston_scaled, centers = 4)
# plot the Boston dataset with clusters
pairs(boston_scaled, col = km1$cluster) # 1 cluster

pairs(boston_scaled, col = km2$cluster) # 2 clusters

pairs(boston_scaled, col = km3$cluster) # 3 clusters

pairs(boston_scaled, col = km4$cluster) # 4 clusters

# too general view, make smaller
pairs(boston_scaled[,6:10], col = km2$cluster) # 2 clusters

pairs(boston_scaled[,6:10], col = km3$cluster) # 3 clusters

pairs(boston_scaled[,6:10], col = km4$cluster) # 4 clusters

Different numbers of centers (1, 2, 3 and 4) were tried for the k-means clustering. One cluster seemed too few, since clear groups appeared as soon as more clusters were allowed; on the other hand, four clusters did not bring a dramatic difference (the centroids and cluster memberships changed little). Thus the optimal number seems to be 2 or 3 clusters.

set.seed(123)
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')+scale_x_continuous(breaks = 1:10,labels=1:10)

The total within-cluster sum of squares (TWCSS) indicates that 2 is the optimal number of clusters, since that is where the TWCSS drops most sharply (from 1 to 2 clusters).
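
Based on this, the final two-cluster solution can be refit and inspected (reusing the pairs() view of columns 6-10 from above):

# refit k-means with the chosen number of clusters and inspect part of the result
km_final <- kmeans(boston_scaled, centers = 2)
pairs(boston_scaled[, 6:10], col = km_final$cluster)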

4.7 3D LDA plot

# Run the code below for the (scaled) train data that you used to fit the LDA. 
#The code creates a matrix product, which is a projection of the data points.
model_predictors <- dplyr::select(train, -crime)

# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)

# Next, install and access the plotly package. Create a 3D plot (**Cool!**)
# of the columns of the matrix product by typing the code below.

library(plotly)
# Note! To install plotly in Linux, remember to install libcurl from terminal.
# * deb: libcurl4-openssl-dev (Debian, Ubuntu, etc)
# * rpm: libcurl-devel (Fedora, CentOS, RHEL)
# * csw: libcurl_dev (Solaris)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color=train$crime)
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.

5 Dimensionality reduction techniques


5.1 Introducing the dataset

This chapter’s dataset originates from the United Nations Development Programme (UNDP). The Human Development Index (HDI) was created to assess the development of a country by more than economic growth alone. More information can be found on the UNDP’s general data page and in their technical notes on calculating the human development indices.

# read the human data, row names as first column
human <- read.csv("./data/human.csv", row.names=1)
str(human)
## 'data.frame':    155 obs. of  8 variables:
##  $ Edu2.FM  : num  1.007 0.997 0.983 0.989 0.969 ...
##  $ Labo.FM  : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Life.Exp : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ Edu.Exp  : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ GNI      : int  64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
##  $ Mat.Mor  : int  4 6 6 5 6 7 9 28 11 8 ...
##  $ Ado.Birth: num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ Parli.F  : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...

The dataset consists of 155 observations (i.e. countries) of 8 variables:

  1. Edu2.FM – Ratio of secondary education in females compared to males (numeric: from 0.1717 to 1.4967)
  2. Labo.FM – Ratio of labour force participation rate in females compared to males (numeric: from 0.1857 to 1.0380)
  3. Life.Exp – Life expectancy at birth (numeric: from 49 to 83.5)
  4. Edu.Exp – Expected years of schooling (numeric: from 5.4 to 20.2)
  5. GNI – Gross national income per capita (numeric: from 581 to 123124)
  6. Mat.Mor – Maternal mortality ratio (numeric: from 1 to 1100)
  7. Ado.Birth – Adolescent birth rate (numeric: 0.6 to 71.95)
  8. Parli.F – Percentage of female representatives in parliament (numeric: from 0 to 57.5)

The data combines several indicators for the countries:

  • Country: name of the country as row name
  • Health and knowledge: GNI, Life.Exp, Edu.Exp, Mat.Mor, Ado.Birth
  • Empowerment: Parli.F, Edu2.FM,Labo.FM

Most of the variable names have been shortened from the original data and two new variables (Edu2.FM and Labo.FM) were computed.

5.2 Graphical overview and summaries of the variables

library(ggplot2) # for graphics
library(GGally)
library(corrplot)
library(dplyr)
summary(human)
##     Edu2.FM          Labo.FM          Life.Exp        Edu.Exp     
##  Min.   :0.1717   Min.   :0.1857   Min.   :49.00   Min.   : 5.40  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:66.30   1st Qu.:11.25  
##  Median :0.9375   Median :0.7535   Median :74.20   Median :13.50  
##  Mean   :0.8529   Mean   :0.7074   Mean   :71.65   Mean   :13.18  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:77.25   3rd Qu.:15.20  
##  Max.   :1.4967   Max.   :1.0380   Max.   :83.50   Max.   :20.20  
##       GNI            Mat.Mor         Ado.Birth         Parli.F     
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50
ggpairs(human, upper = list(continuous = wrap("cor", family="sans"))) # graphical overview

There is much skewness present in the variables: only Edu2.FM and Edu.Exp are somewhat normally distributed. Many variables are strongly correlated, as implied by the statistically significant correlation coefficients. The correlations can be visualized better with a correlation plot:

# compute the correlation matrix and visualize it with corrplot
cor(human) %>% corrplot(type="lower") # lower correlation matrix, since symmetric

The correlation matrix gives us an idea of the relationships between the variables; there seem to be strong

  • negative correlations between the maternal mortality ratio (Mat.Mor) and life expectancy (Life.Exp), expected years of schooling (Edu.Exp) and the female/male secondary education ratio (Edu2.FM); the adolescent birth rate (Ado.Birth) behaves similarly, and

  • positive correlations between expected years of schooling and life expectancy (Edu.Exp & Life.Exp), and between the adolescent birth rate and maternal mortality (Ado.Birth & Mat.Mor).

The percentage of female representatives in parliament (Parli.F) and the female/male labour force participation ratio (Labo.FM) do not seem to be correlated with the other variables. There is, however, a slight correlation between the two.

5.3 Principal Component Analysis


# modified human is available

# standardize the variables
human_std <- scale(human)

# print out summaries of the standardized variables
summary(human_std)
##     Edu2.FM           Labo.FM           Life.Exp          Edu.Exp       
##  Min.   :-2.8189   Min.   :-2.6247   Min.   :-2.7188   Min.   :-2.7378  
##  1st Qu.:-0.5233   1st Qu.:-0.5484   1st Qu.:-0.6425   1st Qu.:-0.6782  
##  Median : 0.3503   Median : 0.2316   Median : 0.3056   Median : 0.1140  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5958   3rd Qu.: 0.7350   3rd Qu.: 0.6717   3rd Qu.: 0.7126  
##  Max.   : 2.6646   Max.   : 1.6632   Max.   : 1.4218   Max.   : 2.4730  
##       GNI             Mat.Mor          Ado.Birth          Parli.F       
##  Min.   :-0.9193   Min.   :-0.6992   Min.   :-1.1325   Min.   :-1.8203  
##  1st Qu.:-0.7243   1st Qu.:-0.6496   1st Qu.:-0.8394   1st Qu.:-0.7409  
##  Median :-0.3013   Median :-0.4726   Median :-0.3298   Median :-0.1403  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.3712   3rd Qu.: 0.1932   3rd Qu.: 0.6030   3rd Qu.: 0.6127  
##  Max.   : 5.6890   Max.   : 4.4899   Max.   : 3.8344   Max.   : 3.1850
# perform principal component analysis (with the SVD method)
pca_human <- prcomp(human_std)

# draw a biplot of the principal component representation and the original variables
biplot(pca_human, choices = 1:2, cex=c(0.8,1), col=c("grey40","deeppink2"))

###

# pca_human, dplyr are available

# create and print out a summary of pca_human
s <- summary(pca_human)
s
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6     PC7
## Standard deviation     2.0708 1.1397 0.87505 0.77886 0.66196 0.53631 0.45900
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595 0.02634
## Cumulative Proportion  0.5361 0.6984 0.79413 0.86996 0.92473 0.96069 0.98702
##                            PC8
## Standard deviation     0.32224
## Proportion of Variance 0.01298
## Cumulative Proportion  1.00000
# rounded percentages of variance captured by each PC
pca_pr <- round(100*s$importance[2, ], digits = 1)

# print out the percentages of variance
pca_pr
##  PC1  PC2  PC3  PC4  PC5  PC6  PC7  PC8 
## 53.6 16.2  9.6  7.6  5.5  3.6  2.6  1.3
# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")

# draw a biplot
biplot(pca_human, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])
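
To see which of the original variables pull in each direction, the loadings can also be inspected numerically; prcomp stores them in the rotation matrix. A small sketch:

# variable loadings on the first two principal components (sketch)
round(pca_human$rotation[, 1:2], 2)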

###

# the tea dataset and packages FactoMineR, ggplot2, dplyr and tidyr are available
library(tidyr)
library(dplyr) # for select(), in case it is not loaded yet
library(FactoMineR)
data("tea") # load the tea data from the FactoMineR package
# column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")

# select the 'keep_columns' to create a new dataset
tea_time <- select(tea, one_of(keep_columns))

# look at the summaries and structure of the data
summary(tea_time)
##         Tea         How                      how           sugar    
##  black    : 74   alone:195   tea bag           :170   No.sugar:155  
##  Earl Grey:193   lemon: 33   tea bag+unpackaged: 94   sugar   :145  
##  green    : 33   milk : 63   unpackaged        : 36                 
##                  other:  9                                          
##                   where           lunch    
##  chain store         :192   lunch    : 44  
##  chain store+tea shop: 78   Not.lunch:256  
##  tea shop            : 30                  
## 
str(tea_time)
## 'data.frame':    300 obs. of  6 variables:
##  $ Tea  : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How  : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ how  : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
# visualize the dataset
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped

###

# tea_time is available

# multiple correspondence analysis
mca <- MCA(tea_time, graph = FALSE)

# summary of the model
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6   Dim.7
## Variance               0.279   0.261   0.219   0.189   0.177   0.156   0.144
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519   7.841
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953  77.794
##                        Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.141   0.117   0.087   0.062
## % of var.              7.705   6.392   4.724   3.385
## Cumulative % of var.  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr    cos2
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139   0.003
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626   0.027
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111   0.107
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841   0.127
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979   0.035
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990   0.020
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347   0.102
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459   0.161
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968   0.478
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898   0.141
##                     v.test     Dim.3     ctr    cos2  v.test  
## black                0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            2.867 |   0.433   9.160   0.338  10.053 |
## green               -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone               -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                3.226 |   1.329  14.771   0.218   8.081 |
## milk                 2.422 |   0.013   0.003   0.000   0.116 |
## other                5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag             -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged          -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |
# visualize MCA
plot(mca, invisible=c("ind"), habillage = "quali")
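
The variable map can get crowded when categories overlap. As a complementary view, FactoMineR’s dimdesc() lists the variables and categories most strongly associated with each dimension; a minimal sketch (using the default 5% significance level):

# describe the MCA dimensions by their most associated variables and categories (sketch)
dimdesc(mca, axes = 1:2)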


(more chapters will be added in a similar fashion as the course proceeds!)